%0 Conference Proceedings
%4 sid.inpe.br/sibgrapi/2020/09.25.14.27
%2 sid.inpe.br/sibgrapi/2020/09.25.14.27.50
%@doi 10.1109/SIBGRAPI51738.2020.00053
%T From explanations to feature selection: assessing SHAP values as feature selection mechanism
%D 2020
%A Marcílio-Jr, Wilson Estécio,
%A Eler, Danilo Medeiros,
%@affiliation São Paulo State University (UNESP) - Department of Mathematics and Computer Science, Presidente Prudente-SP
%@affiliation São Paulo State University (UNESP) - Department of Mathematics and Computer Science, Presidente Prudente-SP
%E Musse, Soraia Raupp,
%E Cesar Junior, Roberto Marcondes,
%E Pelechano, Nuria,
%E Wang, Zhangyang (Atlas),
%B Conference on Graphics, Patterns and Images, 33 (SIBGRAPI)
%C Porto de Galinhas (virtual)
%8 7-10 Nov. 2020
%I IEEE Computer Society
%J Los Alamitos
%S Proceedings
%K feature selection, explainability.
%X Explainability has become one of the most discussed topics in machine learning research in recent years. Although many methodologies have been proposed to provide explanations for black-box models, little attention has been given to the pre-processing steps of the machine learning development pipeline, such as feature selection. In this work, we evaluate SHAP, a game-theoretic approach for explaining the output of any machine learning model, as a feature selection mechanism. Our experiments show that, besides explaining a model's decisions, SHAP achieves better results than three commonly used feature selection algorithms.
%@language en
%3 PID6618233.pdf
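
A minimal sketch of the general idea described in the abstract: ranking features by their mean absolute SHAP value and keeping the top-k. This is not the authors' exact pipeline; the dataset, model, explainer choice, and value of k below are illustrative assumptions.

import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)          # SHAP values for a tree ensemble
shap_values = explainer.shap_values(X)         # shape: (n_samples, n_features)

importance = np.abs(shap_values).mean(axis=0)  # mean |SHAP| per feature
k = 5                                          # arbitrary number of features to keep
selected = np.argsort(importance)[::-1][:k]    # indices of the top-k features
X_reduced = X[:, selected]                     # reduced feature matrix

The reduced matrix X_reduced can then be used to retrain a model and compare its performance against other feature selection baselines, which is the kind of evaluation the paper reports.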

